
    Exports, Technical Progress and Productivity Growth in Chinese Manufacturing Industries

    Theories suggesting either static or dynamic productivity gains from exports often assume the prior existence of a perfect market. In the presence of market failure, however, the competition effect and the resource-reallocation effect of exports on productive efficiency may be greatly reduced, and there may actually be disincentives for innovation. This paper analyses the impact of exports on total factor productivity (TFP) growth in a transition economy using a panel of Chinese manufacturing industries over the period 1990-1997. TFP growth is estimated with a non-parametric approach and decomposed into technical progress and efficiency change. We find no evidence of significant productivity gains at the industry level resulting from exports. The findings suggest that, for exports to generate a significant positive effect on TFP growth, a well-developed domestic market and a neutral, outward-oriented policy are necessary.
    Keywords: exports, industrial efficiency, technical progress, productivity
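
    The abstract does not spell out the decomposition, but the standard non-parametric route is a DEA-based Malmquist productivity index, which factors TFP growth into efficiency change and technical progress. A sketch of that convention (notation assumed here, not taken from the paper):

```latex
M\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right) =
  \underbrace{\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}}_{\text{efficiency change}}
  \times
  \underbrace{\left[
    \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)} \cdot
    \frac{D^{t}\!\left(x^{t}, y^{t}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
  \right]^{1/2}}_{\text{technical progress}}
```

    Here D^s(x, y) is the distance function measured against the period-s frontier; an index value above one indicates TFP growth.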

    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started SEMAINE project, which aims to build a set of Sensitive Artificial Listeners: conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust real-time recognition and generation of non-verbal behaviour, both when the agent is speaking and when it is listening. We report on data collection and on the design of a system architecture geared towards real-time responsiveness.

    The Sensitive Artificial Listener: an induction technique for generating emotionally coloured conversation

    The aim of this paper is to document and share an induction technique, the Sensitive Artificial Listener (SAL), that generates data that are both tractable and reasonably naturalistic. The technique focuses on conversation between a human and an agent that either is, or appears to be, a machine. It is designed to capture a broad spectrum of emotional states, expressed in the ‘emotionally coloured discourse’ typical of everyday conversation. The technique is based on the observation that two people can hold a conversation in which one pays little or no attention to the meaning of what the other says and chooses responses on the basis of superficial cues. In SAL, system responses take the form of a repertoire of stock phrases keyed to the emotional colouring of what the user says. The technique has been used to collect data of sufficient quantity and quality to train machine recognition systems.
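
    The paper documents an elicitation technique rather than an implementation, but the core mechanism (stock phrases keyed to the emotional colouring of the user's speech, with its meaning ignored) can be sketched. All phrases, thresholds and names below are illustrative assumptions, not the SAL script:

```python
import random

# Illustrative sketch of the SAL principle: the agent ignores what the user
# means and keys a stock phrase to the emotional colouring of how it was said.
STOCK_PHRASES = {
    "positive": ["That's wonderful!", "Go on, this sounds great."],
    "neutral": ["I see.", "Tell me more."],
    "negative": ["Oh dear.", "That does sound difficult."],
}

def emotional_colour(valence):
    """Bucket a valence estimate in [-1, 1], e.g. from a prosody classifier."""
    if valence > 0.3:
        return "positive"
    if valence < -0.3:
        return "negative"
    return "neutral"

def sal_response(valence):
    """Pick a response from superficial cues alone, as the technique describes."""
    return random.choice(STOCK_PHRASES[emotional_colour(valence)])

print(sal_response(0.7))  # e.g. "That's wonderful!"
```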

    Emotion and mental state recognition from speech

    Issues in data labelling

    AV+EC 2015: the first affect recognition challenge bridging across audio, video, and physiological data

    We present the first Audio-Visual + Emotion recognition Challenge and workshop (AV+EC 2015), aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and physiological emotion analysis. This is the fifth event in the AVEC series, but the first Challenge to bridge across audio, video and physiological data. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing, to bring together the audio, video and physiological emotion recognition communities, to compare the relative merits of the three approaches to emotion recognition under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. This paper presents the challenge, the dataset and the performance of the baseline system.
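
    One question the Challenge poses is to what extent fusion of the three modalities is beneficial. A minimal late-fusion sketch, assuming per-modality continuous predictions are already available (the weights are placeholders to be tuned on a development set, not the AV+EC baseline):

```python
import numpy as np

# Weighted-average late fusion of per-modality emotion predictions.
def late_fusion(preds, weights):
    total = sum(weights[m] for m in preds)
    return sum(weights[m] * preds[m] for m in preds) / total

preds = {
    "audio": np.array([0.2, 0.4, 0.1]),
    "video": np.array([0.3, 0.5, 0.0]),
    "physio": np.array([0.1, 0.3, 0.2]),
}
weights = {"audio": 0.5, "video": 0.3, "physio": 0.2}  # illustrative only
print(late_fusion(preds, weights))
```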

    The ordinal nature of emotions

    Representing everyday emotional states computationally is a challenging task and, arguably, one of the most fundamental for affective computing. Standard practice in emotion annotation is to ask humans to assign an absolute value of intensity to each emotional behavior they observe. Psychological theories and evidence from multiple disciplines, including neuroscience, economics and artificial intelligence, however, suggest that assigning reference-based (relative) values to subjective notions is better aligned with the underlying representations than assigning absolute values. Evidence also shows that we use reference points, or anchors, against which we evaluate values such as the emotional state of a stimulus, suggesting again that ordinal labels are a more suitable way to represent emotions. This paper draws together the theoretical reasons to favor relative over absolute labels for representing and annotating emotion, reviewing the literature across several disciplines. We go on to discuss good and bad practices for treating ordinal and other forms of annotation data, and make the case for preference learning methods as the appropriate approach for treating ordinal labels. We finally discuss the advantages of relative annotation with respect to both reliability and validity through a number of case studies in affective computing, and address common objections to the use of ordinal data. Overall, the thesis that emotions are by nature relative is supported by both theoretical arguments and evidence, and opens new horizons for the way emotions are viewed, represented and analyzed computationally.
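
    A natural first step for the preference learning methods the paper advocates is to convert existing absolute ratings into pairwise ordinal preferences. A minimal sketch, where the tie margin is an illustrative assumption:

```python
from itertools import combinations

def to_preferences(ratings, margin=0.1):
    """Turn absolute emotion ratings into ordinal pairs (i, j), read as
    'item i was rated more intense than item j'; near-ties are discarded."""
    prefs = []
    for i, j in combinations(range(len(ratings)), 2):
        if ratings[i] - ratings[j] > margin:
            prefs.append((i, j))
        elif ratings[j] - ratings[i] > margin:
            prefs.append((j, i))
    return prefs

print(to_preferences([0.9, 0.2, 0.5]))  # [(0, 1), (0, 2), (2, 1)]
```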

    AVEC 2016 – Depression, mood, and emotion recognition workshop and challenge

    The Audio/Visual Emotion Challenge and Workshop (AVEC 2016) "Depression, Mood and Emotion" will be the sixth competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and physiological depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audio, video and physiological processing communities, to compare the relative merits of the various approaches under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.
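
    For the continuous emotion task, AVEC-style evaluations typically score predictions against gold-standard traces with the concordance correlation coefficient (CCC), which penalises both decorrelation and scale or location bias. A minimal sketch (the toy arrays are invented):

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance correlation coefficient for continuous emotion traces."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

y_true = np.array([0.1, 0.4, 0.3, 0.8])
y_pred = np.array([0.2, 0.5, 0.2, 0.7])
print(ccc(y_true, y_pred))  # 1.0 would be perfect agreement
```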

    A computerised test of perceptual ability for learning endoscopic and laparoscopic surgery and other image guided procedures: Score norms for PicSOr

    Background: The aptitude to infer the shape of 3-D structures, such as internal organs, from 2-D monitor displays in image-guided endoscopic and laparoscopic procedures varies. We sought both to validate a computer-generated task, Pictorial Surface Orientation (PicSOr), which assesses this aptitude, and to identify norm-referenced scores. Methods: 400 subjects (339 surgeons and 61 controls) completed the PicSOr test; 50 subjects completed it again one year later. Results: Complete data were available for 396 of 400 subjects (99%). PicSOr showed high test-retest reliability (r = 0.807, p < 0.001). Surgeons performed better than controls (surgeons = 0.874 vs controls = 0.747, p < 0.001). Some surgeons (n = 22; 5.5%) performed atypically on the test. Conclusions: PicSOr population scores are negatively skewed. PicSOr quantitatively characterises an aptitude strongly correlated with the learning and performance of image-guided medical tasks. Most can do the PicSOr task almost perfectly, but a substantial minority perform atypically, and this is probably relevant to learning and performing endoscopic tasks.
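
    The reported reliability figure is a test-retest correlation, i.e. Pearson's r between the two administrations one year apart. A minimal sketch with synthetic scores (the study's raw data are not reproduced here):

```python
import numpy as np

# Test-retest reliability as Pearson's r between first and second sittings.
first = np.array([0.88, 0.91, 0.75, 0.60, 0.95])   # synthetic scores
second = np.array([0.85, 0.93, 0.70, 0.65, 0.96])  # synthetic scores
r = np.corrcoef(first, second)[0, 1]
print(f"test-retest r = {r:.3f}")
```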